28 research outputs found

    Occlusion and Slice-Based Volume Rendering Augmentation for PET-CT

    Dual-modality positron emission tomography and computed tomography (PET-CT) depicts pathophysiological function with PET in an anatomical context provided by CT. Three-dimensional volume rendering approaches enable visualization of a two-dimensional slice of interest (SOI) from PET combined with direct volume rendering (DVR) from CT. However, because DVR depicts the whole volume, it may occlude a region of interest, such as a tumor in the SOI. Volume clipping can eliminate this occlusion by cutting away parts of the volume, but it requires intensive user involvement to decide the appropriate depth at which to clip. Currently available transfer functions can make the regions of interest visible, but this often requires complex parameter tuning and coupled pre-processing of the data to define the regions. Hence, we propose a new visualization algorithm in which an SOI from PET is augmented by volumetric contextual information from a DVR of the counterpart CT so that the obtrusiveness of the CT in the SOI is minimized. Our approach automatically calculates an augmentation depth parameter from the occlusion information derived from the CT voxels in front of the PET SOI. The depth parameter is then used to generate an opacity weight function that controls the amount of contextual information visible from the DVR. We outline the improvements of our visualization approach over other slice-based approaches and our own previous work, and present a preliminary clinical evaluation of our visualization in a series of PET-CT studies from patients with non-small cell lung cancer.
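    The augmentation idea above, i.e. deriving a depth parameter from the accumulated opacity of the CT voxels in front of the slice and turning it into an opacity weight, can be sketched as follows. This is a minimal illustration under assumed conventions (front-to-back compositing, a hypothetical occlusion threshold), not the paper's exact formulation:

    ```python
    import numpy as np

    def augmentation_depth(ct_opacity, occlusion_threshold=0.9):
        """Per ray, estimate how many CT samples in front of the PET SOI can be
        traversed before accumulated opacity effectively hides the slice.
        ct_opacity has shape (depth, h, w); illustrative, not the paper's model."""
        transmittance = np.cumprod(1.0 - ct_opacity, axis=0)
        occluded = transmittance < (1.0 - occlusion_threshold)
        # first depth index at which the ray becomes occluded (full depth if never)
        return np.where(occluded.any(axis=0),
                        occluded.argmax(axis=0),
                        ct_opacity.shape[0])

    def opacity_weight(sample_depth, aug_depth):
        """Linear fall-off: CT samples nearer the SOI than aug_depth keep their
        opacity; samples beyond it are progressively suppressed."""
        return 1.0 - np.clip((sample_depth - aug_depth) / np.maximum(aug_depth, 1),
                             0.0, 1.0)
    ```

    Here `augmentation_depth` returns, per ray, how much CT context can sit in front of the SOI before it is hidden, and `opacity_weight` attenuates CT samples beyond that depth.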

    Exploration of virtual and augmented reality for visual analytics and 3D volume rendering of functional magnetic resonance imaging (fMRI) data

    Statistical analysis of functional magnetic resonance imaging (fMRI), such as independent components analysis, is providing new scientific and clinical insights into the data, with capabilities such as characterising traits of schizophrenia. However, with existing approaches to fMRI analysis there are a number of challenges that prevent such analysis from being fully utilised, including understanding exactly what a 'significant activity' pattern is, which structures are consistent and different between individuals and across the population, and how to deal with imaging artifacts such as noise. Interactive visual analytics has been presented as a step towards solving these challenges by presenting the data to users in a way that illuminates meaning. This includes using circular layouts that represent network connectivity and volume renderings with 'in situ' network diagrams. These visualisations currently rely on traditional 2D 'flat' displays with mouse-and-keyboard input. Due to the constrained screen space and an implied concept of depth, they are limited in presenting a meaningful, uncluttered abstraction of the data without compromising on preserving anatomic context. In this paper, we present our ongoing research on fMRI visualisation and discuss the potential for virtual reality (VR) and augmented reality (AR), coupled with gesture-based inputs, to create an immersive environment for visualising fMRI data. We suggest that VR/AR can potentially overcome the identified challenges by reducing visual clutter and by allowing users to navigate the data abstractions in a 'natural' way that lets them keep their focus on the visualisations. We present the exploratory research we have performed in creating immersive VR environments for fMRI data.

    Visibility-driven PET-CT Visualisation with Region of Interest (ROI) Segmentation

    Multi-modality positron emission tomography – computed tomography (PET-CT) visualises biological and physiological functions (from PET) as regions of interest (ROIs) within a higher-resolution anatomical reference frame (from CT). The need to efficiently assess and assimilate the information from these co-aligned volumes simultaneously has stimulated new visualisation techniques that combine 3D volume rendering with interactive transfer functions to enable efficient manipulation of these volumes. However, in typical multi-modality volume rendering, the transfer functions for the volumes are manipulated in isolation and the resulting volumes are then fused, which fails to exploit the spatial correlation between the aligned volumes. This lack of feedback makes multi-modality transfer function manipulation complex and time-consuming. Further, a transfer function alone is often insufficient to select an ROI whose voxel properties are similar to those of non-relevant regions. In this study, we propose a new ROI-based multi-modality visibility-driven transfer function (m2-vtf) for PET-CT visualisation. We present a novel 'visibility' metric, based on a fundamental optical property that represents how much of the ROIs is visible to the user, and use it to measure how the visibility of the PET ROIs is affected by transfer function manipulations of the counterpart CT. To overcome the difficulty of ROI selection, we provide an intuitive ROI selection tool based on automated PET segmentation. We further present a multi-modality transfer function automation in which the visibility metric of the PET ROIs is used to automate the CT transfer function. Our GPU implementation achieved interactive visualisation of multi-modality PET-CT with efficient and intuitive transfer function manipulation.
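    One plausible reading of such a visibility metric, sketched here under hypothetical conventions (front-to-back compositing over pre-sampled opacities) rather than the paper's actual definition, is the fraction of each ray's composited opacity that is contributed by ROI samples:

    ```python
    import numpy as np

    def roi_visibility(opacities, roi_mask):
        """Illustrative visibility metric: the share of the final composited
        opacity along each ray contributed by ROI samples, averaged over rays.
        `opacities` and `roi_mask` both have shape (depth, h, w)."""
        # transmittance remaining before each sample (front-to-back order)
        t = np.cumprod(1.0 - opacities, axis=0)
        t = np.concatenate([np.ones_like(t[:1]), t[:-1]], axis=0)
        contrib = t * opacities                      # per-sample contribution
        roi_contrib = (contrib * roi_mask).sum(axis=0)
        total = contrib.sum(axis=0)
        ratio = np.divide(roi_contrib, total,
                          out=np.zeros_like(total), where=total > 0)
        return float(np.mean(ratio))
    ```

    In this reading, an automated CT transfer function would be tuned to keep the metric high, so CT context never drowns out the PET ROIs.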

    A Smart Checkpointing Scheme for Improving the Reliability of Clustering Routing Protocols

    In wireless sensor networks, system architectures and applications are designed to consider both resource constraints and scalability, because such networks are composed of numerous sensor nodes with various sensors and actuators, small memories, low-power microprocessors, radio modules, and batteries. Clustering routing protocols based on data aggregation schemes, aimed at minimizing the number of packets, have been proposed to meet these requirements. In clustering routing protocols, the cluster head plays an important role: it collects data from its member nodes and aggregates the collected data. To improve reliability and reduce recovery latency, we propose a checkpointing scheme for the cluster head in which backup nodes periodically monitor and checkpoint the current state of the cluster head. We also derive the checkpointing interval that maximizes reliability while using the same amount of energy consumed by clustering routing protocols that operate without checkpointing. Experimental comparisons with existing non-checkpointing schemes show that our scheme reduces both energy consumption and recovery latency.
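    The paper derives its own energy-constrained checkpointing interval; purely as a point of comparison, the classic first-order trade-off between checkpoint overhead and expected recomputation after a failure is captured by Young's approximation (not the paper's formula):

    ```python
    import math

    def optimal_checkpoint_interval(failure_rate, checkpoint_cost):
        """Young's first-order approximation: the interval T that minimizes
        expected lost work balances checkpoint overhead (C / T) against
        expected recomputation after a failure (lambda * T / 2),
        giving T = sqrt(2 * C / lambda)."""
        return math.sqrt(2.0 * checkpoint_cost / failure_rate)
    ```

    For example, with a failure rate of 0.01 per unit time and a checkpoint cost of 2 energy/time units, the approximation suggests checkpointing every 20 time units; an energy-constrained derivation like the paper's would adjust this to hold total energy constant.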

    Remote Interactive Surgery Platform (RISP): Proof of Concept for an Augmented-Reality-Based Platform for Surgical Telementoring

    The "Remote Interactive Surgery Platform" (RISP) is an augmented reality (AR)-based platform for surgical telementoring. It builds upon recent advances of mixed reality head-mounted displays (MR-HMD) and associated immersive visualization technologies to assist the surgeon during an operation. It enables an interactive, real-time collaboration with a remote consultant by sharing the operating surgeon's field of view through the Microsoft (MS) HoloLens2 (HL2). Development of the RISP started during the Medical Augmented Reality Summer School 2021 and is currently still ongoing. It currently includes features such as three-dimensional annotations, bidirectional voice communication and interactive windows to display radiographs within the sterile field. This manuscript provides an overview of the RISP and preliminary results regarding its annotation accuracy and user experience measured with ten participants

    A Web-Based Medical Multimedia Visualisation Interface for Personal Health Records

    The healthcare industry has begun to utilise web-based systems and cloud computing infrastructure to develop an increasing array of online personal health record (PHR) systems. Although these systems provide the technical capacity to store and retrieve medical data in various multimedia formats, including images, videos, voice, and text, individual patient use remains limited by the lack of intuitive data representation and visualisation techniques. As such, further research is necessary to better visualise and present these records in ways that make the complex medical data more intuitive. In this study, we present a web-based PHR visualisation system, called the 3D medical graphical avatar (MGA), which was designed to explore web-based delivery of a wide array of medical data types, including multi-dimensional medical images, medical videos, text-based data, and spatial annotations. Mapping information was extracted from each of the data types and was used to embed spatial and textual annotations, such as regions of interest (ROIs) and time-based video annotations. The MGA itself is built from clinical patient imaging studies, when available. We have taken advantage of the emerging web technologies of HTML5 and WebGL to make our application available to a wider base of users and devices. We analysed the performance of our proof-of-concept prototype system on mobile and desktop consumer devices. Our initial experiments indicate that our system can render the medical data in a fashion that enables interactive navigation of the MGA.

    A Mobility-Aware Adaptive Duty Cycling Mechanism for Tracking Objects during Tunnel Excavation

    Tunnel construction workers face many dangers while working under dark conditions, with difficult access and egress, and many potential hazards. To enhance safety at tunnel construction sites, low-latency tracking of mobile objects (e.g., heavy-duty equipment) and construction workers is critical for managing the dangerous construction environment. Wireless Sensor Networks (WSNs) are the basis for a widely used technology for monitoring the environment because of their energy efficiency and scalability. However, their use involves an inherent point-to-point delay caused by duty cycling mechanisms, which can result in a significant rise in delivery latency when tracking mobile objects. To overcome this issue, we propose a mobility-aware adaptive duty cycling mechanism for WSNs based on object mobility. For the evaluation, we tested this mechanism for mobile object tracking at a tunnel excavation site. The evaluation results showed that the proposed mechanism could track mobile objects with low latency while they were moving, and could reduce energy consumption by increasing sleep time while the objects were immobile.
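    A minimal sketch of mobility-aware duty cycling, with hypothetical thresholds not taken from the paper: the wake fraction grows with the tracked object's speed and collapses to a floor when the object is immobile:

    ```python
    def adapt_duty_cycle(speed, min_duty=0.01, max_duty=0.5, speed_cap=2.0):
        """Illustrative policy: scale a node's wake fraction with the tracked
        object's speed (m/s), so moving objects get low-latency forwarding
        while idle periods save energy. All constants are hypothetical."""
        if speed <= 0:
            return min_duty                # object immobile: sleep as much as possible
        frac = min(speed / speed_cap, 1.0) # saturate above speed_cap
        return min_duty + frac * (max_duty - min_duty)
    ```

    A real protocol would also need to disseminate the mobility state to the nodes along the object's path, which is where the point-to-point delay trade-off discussed above arises.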

    Impact of Mobility on Routing Energy Consumption in Mobile Sensor Networks

    Mobility in mobile sensor networks causes frequent route breaks, and each routing scheme reacts differently to them. This results in degraded energy consumption while the route is re-established. Since routing schemes have various operational characteristics for rerouting, the impact of mobility on routing energy consumption differs significantly under varying network dynamics. Therefore, the mobility impact should be considered when analyzing routing energy consumption in mobile sensor networks. However, most analyses of routing energy consumption concentrate on traffic conditions and often neglect the mobility impact. We analyze the impact of mobility on routing energy consumption by deriving the expected energy consumption of reactive, proactive, and flooding schemes as a function of both the packet arrival rate and the topology change rate. Routing energy consumption in mobile sensor networks is analytically shown to have a strong relationship with sensor mobility and traffic conditions. We then demonstrate the accuracy of our analysis through simulations. Our analysis can be used to select the routing scheme that will operate most energy-efficiently for a sensor application, taking into account mobility as well as traffic conditions.
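    The kind of per-scheme energy model described above can be illustrated with two toy cost functions of the packet arrival rate and topology change rate; the constants and functional forms here are hypothetical stand-ins, not the paper's derived expressions:

    ```python
    def reactive_energy(pkt_rate, change_rate, e_data=1.0, e_discovery=5.0):
        """Toy model: reactive routing pays a route (re)discovery cost on each
        topology change, plus per-packet forwarding cost (per unit time)."""
        return pkt_rate * e_data + change_rate * e_discovery

    def proactive_energy(pkt_rate, change_rate, e_data=1.0, e_update=2.0,
                         update_rate=1.0):
        """Toy model: proactive routing pays periodic control updates regardless
        of traffic, plus extra updates per topology change and per-packet cost."""
        return pkt_rate * e_data + update_rate * e_update + change_rate * e_update
    ```

    Even this toy comparison shows the qualitative conclusion: which scheme is cheaper depends jointly on traffic rate and topology change rate, which is why an analysis covering both is needed to pick a scheme.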

    An Entropy Analysis-Based Window Size Optimization Scheme for Merging LiDAR Data Frames

    LiDAR is a useful technology for gathering point cloud data from its environment and has been adapted to many applications. We use a cost-efficient LiDAR system attached to a moving object to estimate the object's location using referenced linear structures. When the object is stationary, the accuracy of extracting linear structures is low given the low-cost LiDAR. We propose a scheme that merges LiDAR data frames to improve the accuracy by exploiting the movement of the object. The proposed scheme finds the optimal window size by means of an entropy analysis: the optimal window size is determined by finding the minimum difference between the entropy indicator of the ideal result and the entropy indicator of the actual result at each window size. The proposed indicator can describe the accuracy of the entire path of the moving object at each window size with a single value. The experimental results show that the proposed scheme can improve the accuracy of linear structure extraction.
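    A simple sketch of the entropy-based selection, assuming a Shannon-entropy indicator over a histogram of merged points (the paper's exact indicator may differ): compute the indicator for each candidate window size and pick the size whose actual value is closest to the ideal one:

    ```python
    import math

    def entropy(hist):
        """Shannon entropy of a histogram of merged LiDAR points
        (illustrative indicator, not the paper's exact definition)."""
        total = sum(hist)
        return -sum((c / total) * math.log2(c / total) for c in hist if c > 0)

    def best_window_size(actual_indicators, ideal_indicators):
        """Pick the window size whose actual entropy indicator is closest to
        the ideal one: small windows under-merge (noisy), large windows blur
        the structure through motion. Window sizes are assumed to start at 1."""
        gaps = [abs(a, ) if False else abs(a - i)
                for a, i in zip(actual_indicators, ideal_indicators)]
        return gaps.index(min(gaps)) + 1
    ```

    The single-valued indicator makes this a one-dimensional search, which is what lets the scheme describe the accuracy of the whole path per window size.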

    A Web-based Multidisciplinary Team Meeting Visualisation System

    Purpose: Multidisciplinary team meetings (MDTs) are the standard of care for safe, effective patient management in modern hospital-based clinical practice. Medical imaging data are often the central discussion points in many MDTs, and these data are typically visualised by all participants on a common large display. We propose a Web-based MDT visualisation system (WMDT-VS) that allows individual participants to view the data on their own personal computing devices, with the potential to customise the imaging data, i.e., to show a different view of the data from that on the common display, for their particular clinical perspective. Methods: We developed the WMDT-VS by leveraging state-of-the-art Web technologies to support four MDT visualisation features: (i) 2D and 3D visualisations of multiple imaging modality data; (ii) a variety of personal computing devices, e.g., smartphones, tablets, laptops and PCs, to access and navigate medical images individually and share the visualisations; (iii) customised participant visualisations; and (iv) the addition of extra local image data for visualisation and discussion. Results: We demonstrated these MDT visualisation features in two simulated MDT settings using different imaging data and usage scenarios, and measured the compatibility and performance of various personal, consumer-level computing devices. Conclusions: Our WMDT-VS provides a more comprehensive visualisation experience for MDT participants.